Autonomous vehicles and robots require increasing levels of robustness and reliability to meet the demands of modern tasks. These requirements especially apply to cameras, as they are the predominant sensors for acquiring information about the environment and supporting actions. Cameras must keep functioning properly and, when necessary, take countermeasures automatically. However, there is hardly any work that examines the practical use of general condition-monitoring approaches for cameras and designs countermeasures in the context of the envisaged high-level application. We propose a generic and interpretable self-health-maintenance framework for cameras built on data- and physically-grounded models. To this end, we determine two reliable, real-time-capable estimators for typical image effects of a camera in poor condition (defocus blur, motion blur, different noise phenomena, and their most common combinations) by comparing traditional and machine-learning-based approaches in extensive experiments. Furthermore, we show how camera parameters (e.g., exposure time and ISO gain) can be adjusted to achieve optimal whole-system capability based on experimental (nonlinear and non-monotonic) input-output performance curves, using object detection, motion blur, and sensor noise as examples. Our framework not only provides a practical, ready-to-use solution for evaluating and maintaining camera health, but can also serve as a basis for extensions that tackle more sophisticated problems by empirically combining additional data sources (e.g., sensor or environment parameters) in order to attain fully reliable and robust machines.
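As a rough illustration of the parameter-adjustment step described above, the following sketch selects the exposure/ISO pair that maximises a measured downstream score over an empirical performance grid. The grid values, parameter ranges, and the use of object-detection mAP as the score are invented stand-ins, not the paper's actual data or procedure.

```python
import numpy as np

# Hypothetical empirical performance map: object-detection mAP measured on a
# grid of exposure times and ISO gains (nonlinear and non-monotonic in both).
exposure_ms = np.array([1, 2, 5, 10, 20, 40])
iso_gain = np.array([100, 200, 400, 800, 1600])
map_grid = np.random.rand(exposure_ms.size, iso_gain.size)  # stand-in scores

# Self-maintenance step: pick the parameter pair that maximises the
# whole-system capability under the measured input-output curves.
i, j = np.unravel_index(np.argmax(map_grid), map_grid.shape)
print(f"set exposure = {exposure_ms[i]} ms, ISO gain = {iso_gain[j]}")
```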
Classical reinforcement learning (RL) techniques are generally concerned with the design of decision-making policies driven by the maximisation of the expected outcome. Nevertheless, this approach does not take into consideration the potential risk associated with the actions taken, which may be critical in certain applications. To address that issue, the present research work introduces a novel methodology based on distributional RL to derive sequential decision-making policies that are sensitive to the risk, the latter being modelled by the tail of the return probability distribution. The core idea is to replace the $Q$ function generally standing at the core of learning schemes in RL by another function taking into account both the expected return and the risk. Named the risk-based utility function $U$, it can be extracted from the random return distribution $Z$ naturally learnt by any distributional RL algorithm. This makes it possible to span the complete potential trade-off between risk minimisation and expected return maximisation, in contrast to fully risk-averse methodologies. Fundamentally, this research yields a truly practical and accessible solution for learning risk-sensitive policies with minimal modification to the distributional RL algorithm, and with an emphasis on the interpretability of the resulting decision-making process.
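As a minimal sketch of the core idea, the snippet below extracts a risk-based utility from a quantile representation of the random return $Z$ (as learnt, e.g., by a QR-DQN-style algorithm). The choice of a left-tail CVaR as the risk measure and the convex trade-off weight are assumptions for illustration; the paper's exact definition of $U$ may differ.

```python
import numpy as np

def risk_based_utility(quantiles: np.ndarray, alpha: float = 0.1,
                       rho: float = 0.5) -> float:
    """Utility U for one (state, action) pair from return quantile samples.

    alpha: tail fraction defining the risk measure (CVaR level).
    rho:   trade-off weight; rho = 0 recovers the expected return (plain Q),
           rho = 1 is fully risk-averse (left tail only).
    """
    expected_return = quantiles.mean()               # E[Z]
    k = max(1, int(alpha * len(quantiles)))          # size of the left tail
    cvar = np.sort(quantiles)[:k].mean()             # CVaR_alpha[Z]
    return (1.0 - rho) * expected_return + rho * cvar

# Greedy action selection then replaces argmax_a Q(s, a) by argmax_a U(s, a)
# over the return distributions already learnt by the distributional agent.
```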
Many real-world applications of language models (LMs), such as code autocomplete and writing assistance, involve human-LM interaction, but the main LM benchmarks are non-interactive, where a system produces output without human intervention. To evaluate human-LM interaction, we develop a framework, Human-AI Language-based Interaction Evaluation (H-LINE), that expands non-interactive evaluation along three dimensions, capturing (i) the interactive process, not only the final output; (ii) the first-person subjective experience, not just a third-party assessment; and (iii) notions of preference beyond quality. We then design five tasks ranging from goal-oriented to open-ended to capture different forms of interaction. On four state-of-the-art LMs (three variants of OpenAI's GPT-3 and AI21's J1-Jumbo), we find that better non-interactive performance does not always translate into better human-LM interaction and that first-person and third-party metrics can diverge, suggesting the importance of examining the nuances of human-LM interaction.
Deep Neural Networks (DNNs) are becoming increasingly important in assisted and automated driving. Using such entities, which are obtained through machine learning, is inevitable: tasks such as recognizing traffic signs cannot reasonably be developed using traditional software development methods. DNNs, however, have the problem that they are mostly black boxes and therefore hard to understand and debug. One particular problem is that they are prone to hidden backdoors: the DNN misclassifies its input because it considers properties that should not be decisive for the output. Backdoors may be introduced either by malicious attackers or by inappropriate training. In any case, detecting and removing them is important in the automotive area, as they might lead to safety violations with potentially severe consequences. In this paper, we introduce a novel method to remove backdoors. Our method works for both intentional and unintentional backdoors, and we do not require prior knowledge about the shape or distribution of backdoors. Experimental evidence shows that our method performs well on several medium-sized examples.
With the rise of AI in recent years and the increase in complexity of the models, the growing demand for computational resources is starting to pose a significant challenge. The need for higher compute power is being met with increasingly potent accelerators and the use of large compute clusters. However, the gain in prediction accuracy from large models trained on distributed and accelerated systems comes at the price of a substantial increase in energy demand, and researchers have started questioning the environmental friendliness of such AI methods at scale. Consequently, energy efficiency plays an important role for AI model developers and infrastructure operators alike. The energy consumption of AI workloads depends on the model implementation and the utilized hardware. Therefore, accurate measurements of the power draw of AI workflows on different types of compute nodes are key to algorithmic improvements and the design of future compute clusters and hardware. To this end, we present measurements of the energy consumption of two typical applications of deep learning models on different types of compute nodes. Our results indicate that (1) deriving energy consumption directly from runtime is not accurate; instead, the composition of the compute node needs to be taken into account; (2) neglecting accelerator hardware on mixed nodes results in disproportionate energy inefficiency; and (3) the energy consumption of model training and inference should be considered separately - while training on GPUs outperforms all other node types in both runtime and energy consumption, inference on CPU nodes can be comparably efficient. One advantage of our approach is that the information on energy consumption is available to all users of the supercomputer, enabling an easy transfer to other workloads along with raising user awareness of energy consumption.
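To make finding (1) concrete, here is a minimal sketch of the measurement principle: energy follows from integrating per-component power traces over time rather than multiplying runtime by a flat figure. All traces and wattages below are invented stand-ins.

```python
import numpy as np

def energy_kwh(t_s: np.ndarray, power_w: np.ndarray) -> float:
    """Trapezoidal integration of a sampled power trace (W over s) -> kWh."""
    joules = float(np.sum(0.5 * (power_w[1:] + power_w[:-1]) * np.diff(t_s)))
    return joules / 3.6e6

# Hypothetical 1 Hz traces for a mixed node running a 10-minute workload.
t = np.arange(0.0, 600.0, 1.0)
p_gpu = np.full_like(t, 250.0)    # accelerator draw (stand-in)
p_rest = np.full_like(t, 180.0)   # CPU, memory, fans, ... (stand-in)

naive = energy_kwh(t, np.full_like(t, 300.0))   # runtime x assumed flat power
measured = energy_kwh(t, p_gpu + p_rest)        # per-component measurement
print(f"naive: {naive:.3f} kWh, measured: {measured:.3f} kWh")
```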
In this paper, we identify the best learning scenario to train a team of agents to compete against multiple possible strategies of opposing teams. We evaluate cooperative value-based methods in a mixed cooperative-competitive environment. We restrict ourselves to the case of a symmetric, partially observable, two-team Markov game. We selected three training methods based on the centralised training and decentralised execution (CTDE) paradigm: QMIX, MAVEN and QVMix. For each method, we considered three learning scenarios differentiated by the variety of team policies encountered during training. For our experiments, we modified the StarCraft Multi-Agent Challenge environment to create competitive environments where both teams could learn and compete simultaneously. Our results suggest that training against multiple evolving strategies achieves the best results when performance is scored against several strategies.
We introduce a new benchmark dataset, Placenta, for node classification in an underexplored domain: predicting microanatomical tissue structures from cell graphs in placenta histology whole slide images. This problem is uniquely challenging for graph learning for a few reasons. Cell graphs are large (>1 million nodes per image), node features are varied (64-dimensions of 11 types of cells), class labels are imbalanced (9 classes ranging from 0.21% of the data to 40.0%), and cellular communities cluster into heterogeneously distributed tissues of widely varying sizes (from 11 nodes to 44,671 nodes for a single structure). Here, we release a dataset consisting of two cell graphs from two placenta histology images totalling 2,395,747 nodes, 799,745 of which have ground truth labels. We present inductive benchmark results for 7 scalable models and show how the unique qualities of cell graphs can help drive the development of novel graph neural network architectures.
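For context, inductive node classification on graphs of this size typically relies on neighbour sampling. The following is a generic PyTorch Geometric sketch on synthetic stand-in data; the dataset's own loaders, splits, and the seven benchmarked models are not reproduced here.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.loader import NeighborLoader
from torch_geometric.nn import SAGEConv

class SAGE(torch.nn.Module):
    def __init__(self, in_dim: int = 64, hidden: int = 128, classes: int = 9):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden)
        self.conv2 = SAGEConv(hidden, classes)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

# Synthetic stand-in for one cell graph: 64-dim node features, 9 classes,
# with ground-truth labels available for only part of the nodes.
n = 100_000
data = Data(
    x=torch.randn(n, 64),
    edge_index=torch.randint(0, n, (2, 500_000)),
    y=torch.randint(0, 9, (n,)),
    train_mask=torch.rand(n) < 0.3,
)

loader = NeighborLoader(data, num_neighbors=[10, 10], batch_size=1024,
                        input_nodes=data.train_mask)
model = SAGE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for batch in loader:
    opt.zero_grad()
    out = model(batch.x, batch.edge_index)
    # NeighborLoader puts the seed (labelled) nodes first in each batch.
    loss = F.cross_entropy(out[:batch.batch_size], batch.y[:batch.batch_size])
    loss.backward()
    opt.step()
```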
Diffusion models are a class of generative models that, when trained on natural image datasets, have shown outstanding performance in creating realistic images compared to other generative models. We introduce DISPR, a diffusion-based model for solving the inverse problem of predicting three-dimensional (3D) cell shapes from two-dimensional (2D) single-cell microscopy images. Using the 2D microscopy image as a prior, DISPR is conditioned to predict realistic 3D shape reconstructions. To demonstrate the applicability of DISPR as a data augmentation tool in a feature-based single-cell classification task, we extract morphological features from single cells grouped into six highly imbalanced classes. Adding features from the DISPR predictions to the three minority classes improved the macro F1 score from $F1_\text{macro} = 55.2 \pm 4.6\%$ to $F1_\text{macro} = 72.2 \pm 4.9\%$. As our method is the first to employ a diffusion-based model in this context, we demonstrate that diffusion models can be applied to inverse problems in 3D and that they learn to reconstruct 3D shapes with realistic morphological features from 2D microscopy images.
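As a schematic of the augmentation protocol (feature extraction, minority-class augmentation, macro F1 scoring), here is a hedged sketch with entirely synthetic features standing in for the real morphological features and DISPR predictions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in morphological features for six highly imbalanced classes.
X = rng.normal(size=(2000, 16))
y = rng.choice(6, size=2000, p=[0.40, 0.25, 0.15, 0.10, 0.06, 0.04])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Augmentation: append features extracted from (here: faked) DISPR-predicted
# 3D shapes for the three minority classes to the training set only.
X_aug = rng.normal(size=(300, 16))
y_aug = rng.choice([3, 4, 5], size=300)
clf = RandomForestClassifier(random_state=0).fit(
    np.vstack([X_tr, X_aug]), np.concatenate([y_tr, y_aug]))

print(f"F1_macro = {f1_score(y_te, clf.predict(X_te), average='macro'):.3f}")
```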
Deep neural networks for survival prediction outperform classical approaches in discrimination, i.e. the ordering of patients according to their time-of-event. Conversely, classical approaches such as the Cox proportional hazards model show better calibration, i.e. the correct temporal prediction of events of the underlying distribution. Especially in the medical domain, where predicting the survival of a single patient is crucial, both discrimination and calibration are important performance metrics. Here we present Discrete Calibrated Survival (DCS), a novel deep neural network for discriminated and calibrated survival prediction that outperforms competing survival models in discrimination on three medical datasets while achieving the best calibration among all discrete-time models. The enhanced performance of DCS can be attributed to two novel features: variable temporal output node spacing and a novel loss term that optimizes the use of uncensored and censored patient data. We consider DCS an important step towards the clinical application of deep-learning-based survival prediction with good calibration.
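To make the censored/uncensored distinction concrete, below is a standard discrete-time survival likelihood in PyTorch. It is a generic formulation, not the DCS loss itself, which additionally uses variable output node spacing and a further loss term.

```python
import torch
import torch.nn.functional as F

def discrete_survival_nll(logits: torch.Tensor, event_bin: torch.Tensor,
                          observed: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood for discrete-time survival prediction.

    logits:    (B, K) per-interval hazard logits from the network.
    event_bin: (B,) interval index of the event or censoring time.
    observed:  (B,) 1.0 if the event was observed, 0.0 if censored.
    """
    B, K = logits.shape
    log_h = F.logsigmoid(logits)     # log h_k, numerically stable
    log_s = F.logsigmoid(-logits)    # log (1 - h_k)
    idx = torch.arange(K).expand(B, K)
    before = (idx < event_bin.unsqueeze(1)).float()   # intervals survived
    at = (idx == event_bin.unsqueeze(1)).float()      # event/censoring interval

    ll = (log_s * before).sum(dim=1)                    # survived earlier bins
    ll = ll + observed * (log_h * at).sum(dim=1)        # event observed at bin
    ll = ll + (1 - observed) * (log_s * at).sum(dim=1)  # still alive at bin
    return -ll.mean()
```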
Reinforcement learning aims to learn optimal policies through interaction with environments whose dynamics are unknown. Many methods rely on the approximation of a value function to derive near-optimal policies. In partially observable environments, these functions depend on the complete sequence of observations and past actions, called the history. In this work, we show empirically that recurrent neural networks trained to approximate such value functions internally filter the posterior probability distribution of the current state given the history, called the belief. More precisely, we show that, as a recurrent neural network learns the Q-function, its hidden states become increasingly correlated with the beliefs of the state variables that are relevant to optimal control. This correlation is measured through their mutual information. In addition, we show that the expected return of an agent increases with the ability of its recurrent architecture to reach a high mutual information between its hidden states and the beliefs. Finally, we show that the mutual information between the hidden states and the beliefs of variables that are irrelevant for optimal control decreases through the learning process. In summary, this work shows that, in its hidden states, a recurrent neural network approximating the Q-function of a partially observable environment reproduces a sufficient statistic of the history that is correlated with the part of the belief relevant for taking optimal actions.
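As a sketch of how such a correlation can be quantified, the snippet below estimates the mutual information between recorded hidden states and a belief value with a generic nonparametric estimator; the arrays are stand-ins, and the paper's exact estimator is not assumed.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# Hypothetical probe: roll out the trained recurrent Q-network, recording
# hidden states h_t alongside the true belief b_t of one state variable
# (computable by exact Bayesian filtering in small simulated POMDPs).
hidden = np.random.randn(5000, 32)   # (timesteps, hidden_dim) stand-in
belief = np.random.rand(5000)        # P(s_t = s* | history) stand-in

# Per-unit MI between hidden activations and the belief; the abstract's
# claim is that this grows during learning for control-relevant variables.
mi = mutual_info_regression(hidden, belief)
print(f"max per-unit MI: {mi.max():.3f} nats")
```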